Convergent Policy Optimization for Safe Reinforcement Learning

Neural Information Processing Systems

We study the safe reinforcement learning problem with nonlinear function approximation, where policy optimization is formulated as a constrained optimization problem with both the objective and the constraint being nonconvex functions. For such a problem, we construct a sequence of surrogate convex constrained optimization problems by replacing the nonconvex functions locally with convex quadratic functions obtained from policy gradient estimators. We prove that the solutions to these surrogate problems converge to a stationary point of the original nonconvex problem. Furthermore, to complement our theoretical results, we apply our algorithm to examples of optimal control and multi-agent reinforcement learning with safety constraints.
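
Algorithmically, the abstract describes a successive convex approximation scheme: at each iterate, the nonconvex objective and constraint are replaced by convex quadratic surrogates built from (noisy) gradient estimates, the resulting convex subproblem is solved, and the iterate is averaged toward the subproblem solution. The following is a minimal sketch of that idea on a toy problem, not the paper's actual algorithm: the functions f and g stand in for the policy objective and safety cost, the noisy analytic gradients stand in for policy gradient estimators, and the proximal weight tau, averaging step gamma, and dual bisection for the subproblem are all illustrative assumptions.

import numpy as np

# Sketch of successive convex approximation for
#   min_theta f(theta)   s.t.   g(theta) <= 0,
# with f, g nonconvex. In the paper's setting f and g would be the
# negated expected return and the safety cost of a policy, with
# gradients from policy-gradient estimators; here both are toy
# functions with noisy analytic gradients (illustrative assumptions).

rng = np.random.default_rng(0)

def f(theta):                       # toy nonconvex objective
    return (theta @ theta - 1.0) ** 2

def grad_f(theta):                  # noisy gradient, mimicking an estimator
    return 4.0 * (theta @ theta - 1.0) * theta + 0.01 * rng.normal(size=2)

def g(theta):                       # toy nonconvex constraint
    return np.sin(theta[0]) + theta[1] ** 2 - 0.5

def grad_g(theta):
    return np.array([np.cos(theta[0]), 2.0 * theta[1]]) + 0.01 * rng.normal(size=2)

def solve_surrogate(theta_k, gf, gg, g_val, tau=1.0):
    """Solve the convex quadratic subproblem (d = theta - theta_k):
         min  gf @ d + (tau/2)|d|^2
         s.t. g_val + gg @ d + (tau/2)|d|^2 <= 0.
    Stationarity of the Lagrangian gives d(lam) = -(gf + lam*gg)/(tau*(1+lam)),
    and the surrogate constraint value is nonincreasing in lam, so bisection
    on the multiplier lam >= 0 solves the subproblem."""
    def d_of(lam):
        return -(gf + lam * gg) / (tau * (1.0 + lam))
    def g_tilde(d):
        return g_val + gg @ d + 0.5 * tau * (d @ d)
    if g_tilde(d_of(0.0)) <= 0.0:   # unconstrained minimizer already feasible
        return theta_k + d_of(0.0)
    lo, hi = 0.0, 1.0
    while g_tilde(d_of(hi)) > 0.0 and hi < 1e8:
        hi *= 2.0                   # grow until the surrogate constraint holds
    for _ in range(60):             # bisect to complementary slackness
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g_tilde(d_of(mid)) > 0.0 else (lo, mid)
    return theta_k + d_of(hi)

theta = np.array([1.5, 1.0])        # infeasible start: g(theta) > 0
for k in range(1, 201):
    theta_star = solve_surrogate(theta, grad_f(theta), grad_g(theta), g(theta))
    gamma = 2.0 / (k + 2.0)         # diminishing averaging step
    theta = (1.0 - gamma) * theta + gamma * theta_star

print(f"theta = {theta}, f(theta) = {f(theta):.4f}, g(theta) = {g(theta):.4f}")

Because the surrogates are quadratic, each fixed multiplier gives the subproblem solution in closed form, so a one-dimensional bisection suffices; a general convex solver would be needed for richer surrogates. If the surrogate feasible set is empty at an infeasible iterate, the bisection cap makes the sketch fall back to the step that most decreases the surrogate constraint.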


Reviews: Convergent Policy Optimization for Safe Reinforcement Learning

Neural Information Processing Systems

Quality: 4-5. Overall: 5-6. Overall, this seems like a nice paper, but I found it hard to evaluate given my background. I also wish the authors had given some intuition for the theoretical properties of their method. My main concerns are about the originality (it seems very similar to [34]) and the weakness of the experiments. Originality: 5/10. This paper seems mostly to be about transferring the more general result of [34] to the specific setting of constrained MDPs, so I wish the authors had given more attention to [34], specifically: reviewing the contribution of [34] in more detail, and clarifying the novelty of this work (is it in the specific design choices?)


Reviews: Convergent Policy Optimization for Safe Reinforcement Learning

Neural Information Processing Systems

The reviewers found the problem addressed in this paper interesting. While they had some concerns regarding overlap with prior work, these were mostly addressed in the rebuttal, and some reviewers raised their scores as a result.

